Cross-Subject Emotion Recognition with Sparsely-Labeled Peripheral Physiological Data Using SHAP-Explained Tree Ensembles
Many challenges remain in emotion recognition from physiological data despite
the substantial progress made recently. In this paper, we attempted to address
two major challenges. First, to deal with sparsely-labeled physiological data,
we decomposed the raw physiological signals using spectrum analysis and
extracted both complexity and energy features from the resulting components.
This procedure helped reduce noise and improve the effectiveness of feature
extraction. Second, to improve the explainability of machine learning models
for emotion recognition with
physiological data, we proposed Light Gradient Boosting Machine (LightGBM) and
SHapley Additive exPlanations (SHAP) for emotion prediction and model
explanation, respectively. The LightGBM model outperformed the eXtreme Gradient
Boosting (XGBoost) model on the public Database for Emotion Analysis using
Physiological signals (DEAP) with f1-scores of 0.814, 0.823, and 0.860 for
binary classification of valence, arousal, and liking, respectively, with
cross-subject validation using eight peripheral physiological signals.
Furthermore, SHAP was able to identify the most important features in
emotion recognition, and revealed the relationships between the predictor
variables and the response variables in terms of their main effects and
interaction effects. The proposed model therefore not only performed well on
peripheral physiological data, but also offered insight into the underlying
mechanisms of emotion recognition.
Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation
In large studies involving multi-protocol Magnetic Resonance Imaging (MRI),
one or more sub-modalities may be missing for a given patient owing to poor
quality (e.g. imaging artifacts), failed acquisitions, or imaging examinations
interrupted partway through. In other cases, certain protocols are unavailable
due to limited scan time or to retrospectively harmonise the imaging protocols
of two independent studies. Missing image modalities pose a
challenge to segmentation frameworks as complementary information contributed
by the missing scans is then lost. In this paper, we propose a novel model,
Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute
one or more missing sub-modalities for a patient scan. MGP-VAE leverages a
Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit
correlations across subjects/patients and sub-modalities. Instead of designing one
network for each possible subset of present sub-modalities or using frameworks
to mix feature maps, missing data can be generated from a single model based on
all the available samples. We show the applicability of MGP-VAE on brain tumor
segmentation where one, two, or three of the four sub-modalities may be missing.
Our experiments against competitive segmentation baselines with missing
sub-modalities on the BraTS'19 dataset demonstrate the effectiveness of the
MGP-VAE model for segmentation tasks. Comment: Accepted in MICCAI 202